Relying on random matrix theory (RMT), this paper studies the asymmetric order-$d$ spiked tensor model with Gaussian noise. Using the variational definition of the singular vectors and values of [Lim, 2005], we show that the analysis of the considered model boils down to that of an equivalent spiked symmetric \emph{block-wise} random matrix, constructed from contractions of the studied tensor with the singular vectors associated with its best rank-1 approximation. Our approach allows the exact characterization of the almost sure asymptotic singular value and of the alignments of the corresponding singular vectors with the true spike components, when $\frac{n_i}{\sum_{j=1}^d n_j} \to c_i \in [0,1]$, with $n_i$'s the tensor dimensions. In contrast to other works that rely mostly on tools from statistical physics, our results rely solely on classical RMT tools such as Stein's lemma. Finally, classical RMT results concerning spiked random matrices are recovered as a particular case.
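To make the model concrete, here is a minimal numerical sketch (not the paper's analysis): it draws an asymmetric order-3 spiked tensor and computes its best rank-1 approximation by alternating power iteration, whose fixed points satisfy the variational singular-value equations of [Lim, 2005]; the printed alignments are the quantities characterized asymptotically above. All dimensions and the signal strength are illustrative.

```python
import numpy as np

def spiked_tensor(beta, n1, n2, n3, rng):
    """Asymmetric order-3 spiked model T = beta * x⊗y⊗z + W / sqrt(n1+n2+n3)."""
    x = rng.standard_normal(n1); x /= np.linalg.norm(x)
    y = rng.standard_normal(n2); y /= np.linalg.norm(y)
    z = rng.standard_normal(n3); z /= np.linalg.norm(z)
    W = rng.standard_normal((n1, n2, n3))
    T = beta * np.einsum('i,j,k->ijk', x, y, z) + W / np.sqrt(n1 + n2 + n3)
    return T, (x, y, z)

def rank1_power_iteration(T, iters=200, seed=0):
    """Alternating power iteration for the best rank-1 approximation: at
    convergence (u, v, w) satisfy Lim's variational equations
    T(·, v, w) = λu, T(u, ·, w) = λv, T(u, v, ·) = λw."""
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    u = rng.standard_normal(n1); u /= np.linalg.norm(u)
    v = rng.standard_normal(n2); v /= np.linalg.norm(v)
    w = rng.standard_normal(n3); w /= np.linalg.norm(w)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)  # dominant singular value
    return lam, (u, v, w)

rng = np.random.default_rng(0)
T, (x, y, z) = spiked_tensor(beta=3.0, n1=100, n2=150, n3=200, rng=rng)
lam, (u, v, w) = rank1_power_iteration(T)
print(lam, abs(u @ x), abs(v @ y), abs(w @ z))  # singular value and alignments
```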
This article proposes and theoretically analyzes a \emph{computationally efficient} multi-task learning (MTL) extension of popular principal component analysis (PCA)-based supervised learning schemes \cite{barshan2011supervised,bair2006prediction}. The analysis reveals that (i) by default, learning may dramatically fail by suffering from \emph{negative transfer}, but that (ii) simple counter-measures on the data labels avert negative transfer and necessarily result in improved performance. Supporting experiments on synthetic and real data benchmarks show that the proposed method achieves performance comparable to state-of-the-art MTL methods, but at a \emph{significantly reduced computational cost}.
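As a toy illustration of the negative-transfer phenomenon in (i), the following sketch (an assumed setup, not the paper's method or its counter-measure) runs a simple PCA-then-nearest-class-mean classifier on a target task, alone and with a pooled source task whose class-mean direction is either aligned with or orthogonal to the target's; pooling helps in the first case and hurts in the second.

```python
import numpy as np

def pca_classifier(X_train, y_train, X_test, k=1):
    """Toy PCA-based supervised scheme: project onto the top-k principal
    components, then classify by nearest class mean."""
    Xc = X_train - X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                                   # top-k principal directions
    Z_train, Z_test = X_train @ P, X_test @ P
    means = {c: Z_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    dists = np.stack([np.linalg.norm(Z_test - m, axis=1) for m in means.values()])
    labels = np.array(list(means.keys()))
    return labels[dists.argmin(axis=0)]

# Two binary tasks whose class-mean directions are either aligned (related
# tasks) or orthogonal (unrelated tasks): naively pooling the data helps in
# the first case and hurts in the second -- toy negative transfer.
rng = np.random.default_rng(0)
d, n = 50, 200
mu_t = np.zeros(d); mu_t[0] = 1.0                  # target-task mean direction
for related, mu_s in [(True, mu_t), (False, np.eye(d)[1])]:
    y_t, y_s = rng.integers(0, 2, n), rng.integers(0, 2, n)
    X_t = (2 * y_t[:, None] - 1) * mu_t + rng.standard_normal((n, d))
    X_s = (2 * y_s[:, None] - 1) * 3 * mu_s + rng.standard_normal((n, d))
    y_test = rng.integers(0, 2, n)
    X_test = (2 * y_test[:, None] - 1) * mu_t + rng.standard_normal((n, d))
    single = (pca_classifier(X_t, y_t, X_test) == y_test).mean()
    pooled = (pca_classifier(np.vstack([X_t, X_s]),
                             np.concatenate([y_t, y_s]), X_test) == y_test).mean()
    print(f"related={related}: single-task acc={single:.2f}, pooled acc={pooled:.2f}")
```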
Tensor models play an increasingly prominent role in many fields, notably in machine learning. In several applications, such as community detection, topic modeling, and Gaussian mixture learning, one must estimate a low-rank signal from a noisy tensor. Hence, understanding the fundamental limits of estimators of that signal inevitably calls for the study of random tensors. Substantial progress has recently been achieved on this subject in the large-dimensional limit. Yet, some of the most significant of these results, in particular a precise characterization of the abrupt phase transition (with respect to the signal-to-noise ratio) that governs the performance of the maximum likelihood (ML) estimator of a symmetric rank-one model with Gaussian noise, were derived on the basis of mean-field spin glass theory, which is not easily accessible to non-experts. In this work, we develop a sharply distinct and more elementary approach, relying on standard but powerful tools brought by years of advances in random matrix theory. The key idea is to study the spectra of random matrices arising from contractions of a given random tensor. We show how this gives access to spectral properties of the random tensor itself. For the aforementioned rank-one model, our technique yields a hitherto unknown fixed-point equation whose solution precisely matches the asymptotic performance of the ML estimator above the phase transition threshold in the third-order case. A numerical verification provides evidence that the same holds for orders 4 and 5, leading us to conjecture that, for any order, our fixed-point equation is equivalent to the known characterization of ML estimation performance obtained by relying on spin glasses. Moreover, our approach sheds light on certain properties of the landscape of the ML problem, and can be extended to other models, such as asymmetric and non-Gaussian ones.
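The key idea, studying the spectra of matrices obtained by contracting the tensor, can be illustrated numerically as follows (a sketch under assumed parameters, not the paper's derivation): contracting a spiked symmetric order-3 tensor along one mode yields a random matrix whose top eigenvalue separates from the bulk when the signal-to-noise ratio is large enough.

```python
import numpy as np

# Spiked symmetric order-3 model Y = beta * x⊗x⊗x + W / sqrt(n), with W
# symmetrized Gaussian; beta and n are illustrative.
rng = np.random.default_rng(0)
n, beta = 150, 5.0
x = rng.standard_normal(n); x /= np.linalg.norm(x)
W = rng.standard_normal((n, n, n))
W = (W + W.transpose(1, 0, 2) + W.transpose(2, 1, 0) +
     W.transpose(0, 2, 1) + W.transpose(1, 2, 0) + W.transpose(2, 0, 1)) / 6
Y = beta * np.einsum('i,j,k->ijk', x, x, x) + W / np.sqrt(n)

def contract(Y, v):
    """Mode-3 contraction M = Y(·, ·, v), an n x n random matrix."""
    return np.einsum('ijk,k->ij', Y, v)

# Contracting with the planted spike x gives a rank-one perturbation of a
# Wigner-like matrix; above the phase transition its top eigenvalue detaches
# from the bulk of the spectrum.
M = contract(Y, x)
eig = np.linalg.eigvalsh((M + M.T) / 2)
print("bulk edge ~", eig[-2], " top eigenvalue:", eig[-1])
```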
Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, that are considered as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should contain relations between the instances, or semantic similarity and dissimilarity, which contrastive learning harms by considering all negatives as noise. To circumvent this issue, we propose a novel formulation of contrastive learning using semantic similarity between instances, called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances based on their learned similarities. We empirically validate our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol for fewer pretraining epochs and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for pretraining video representation and that the learned representation can generalize to video downstream tasks.
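A schematic PyTorch sketch of such a soft contrastive objective is given below; the mixing weight lam and the two temperatures are illustrative placeholders, not the paper's hyperparameters. The target distribution interpolates between the one-hot positive of classical NCE and a similarity distribution estimated from the second view, so that negatives are pushed or pulled according to their learned similarity.

```python
import torch
import torch.nn.functional as F

def sce_loss(z1, z2, lam=0.5, tau=0.1, tau_t=0.07):
    """Schematic soft contrastive objective in the spirit of SCE.
    z1, z2: L2-normalized embeddings of two views, shape (batch, dim)."""
    batch = z1.shape[0]
    sim = z1 @ z2.T                           # cross-view similarities
    with torch.no_grad():
        # Relational target: similarity distribution from the second view,
        # mixed with the one-hot positive distribution of classical NCE.
        sim_t = z2 @ z2.T
        sim_t.fill_diagonal_(float('-inf'))   # a sample is not its own relation
        p_rel = F.softmax(sim_t / tau_t, dim=1)
        target = lam * torch.eye(batch) + (1.0 - lam) * p_rel
    log_q = F.log_softmax(sim / tau, dim=1)
    return -(target * log_q).sum(dim=1).mean()

# Toy usage with random embeddings standing in for two augmented views.
z1 = F.normalize(torch.randn(8, 128), dim=1)
z2 = F.normalize(torch.randn(8, 128), dim=1)
print(sce_loss(z1, z2))
```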
The optimal layout of a complex system such as an aerospace vehicle consists of placing a given number of components in a container in order to minimize one or several objectives under geometrical or functional constraints. This paper presents an extended formulation of this problem as a variable-size design space (VSDS) problem, in order to take into account a large number of architectural choices and component allocations during the design process. As a representative example of such systems, considering the layout of a satellite module, the VSDS aspect translates the fact that the optimizer has to choose between several subdivisions of the components. For instance, one large tank of fuel might be placed, or two smaller tanks, or three even smaller tanks holding the same amount of fuel. In order to tackle this NP-hard problem, a genetic algorithm enhanced by an adapted hidden-variables mechanism is proposed. The latter is illustrated on a toy case and on an aerospace application case representative of real-world complexity, in order to assess the performance of the proposed algorithm. The results obtained using the proposed mechanism are reported and analyzed.
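The hidden-variables idea can be sketched as follows (an assumed encoding, not the paper's exact mechanism): each chromosome carries genes for the maximum number of components, an architecture gene selects the active subdivision, and inactive genes are ignored by the fitness function while remaining subject to mutation, keeping the encoding fixed-length despite the variable-size design space.

```python
import random

MAX_TANKS = 3
SUBDIVISIONS = {1: [3.0], 2: [1.5, 1.5], 3: [1.0, 1.0, 1.0]}  # same total fuel

def random_individual():
    # (x, y) position genes exist for every potential tank, even hidden ones.
    return {"n_tanks": random.choice(list(SUBDIVISIONS)),
            "pos": [(random.uniform(0, 10), random.uniform(0, 10))
                    for _ in range(MAX_TANKS)]}

def fitness(ind):
    """Toy objective: keep active tanks near the module centre and penalize
    overlaps; hidden genes (index >= n_tanks) are not evaluated."""
    n = ind["n_tanks"]
    active, sizes = ind["pos"][:n], SUBDIVISIONS[n]
    cost = sum(((x - 5) ** 2 + (y - 5) ** 2) ** 0.5 for x, y in active)
    for i in range(n):
        for j in range(i + 1, n):
            d = ((active[i][0] - active[j][0]) ** 2 +
                 (active[i][1] - active[j][1]) ** 2) ** 0.5
            if d < sizes[i] + sizes[j]:          # overlap penalty
                cost += 100.0
    return cost

def mutate(ind, rate=0.3):
    child = {"n_tanks": ind["n_tanks"], "pos": list(ind["pos"])}
    if random.random() < rate:                   # architectural mutation
        child["n_tanks"] = random.choice(list(SUBDIVISIONS))
    for i in range(MAX_TANKS):                   # positional mutation, applied
        if random.random() < rate:               # to hidden genes as well
            child["pos"][i] = (random.uniform(0, 10), random.uniform(0, 10))
    return child

pop = [random_individual() for _ in range(50)]
for _ in range(100):                             # simple (mu + lambda) loop
    pop += [mutate(random.choice(pop)) for _ in range(50)]
    pop = sorted(pop, key=fitness)[:50]
print(pop[0]["n_tanks"], round(fitness(pop[0]), 2))
```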
Machine Learning for Source Code (ML4Code) is an active research field in which extensive experimentation is needed to discover how to best use source code's richly structured information. With this in mind, we introduce JEMMA, an Extensible Java Dataset for ML4Code Applications, which is a large-scale, diverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is to lower the barrier to entry in ML4Code by providing the building blocks to experiment with source code models and tasks. JEMMA comes with a considerable amount of pre-processed information such as metadata, representations (e.g., code tokens, ASTs, graphs), and several properties (e.g., metrics, static analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2 million classes and over 8 million methods. JEMMA is also extensible, allowing users to add new properties and representations to the dataset and to evaluate tasks on them. Thus, JEMMA becomes a workbench that researchers can use to experiment with novel representations and tasks operating on source code. To demonstrate the utility of the dataset, we also report results from two empirical studies on our data, ultimately showing that significant work lies ahead in the design of context-aware source code models that can reason over a broader network of source code entities in a software project, the very task that JEMMA is designed to help with.
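To fix ideas, here is a hypothetical sketch of the kind of per-method record such a dataset exposes; all field names are illustrative, not JEMMA's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical per-method record: each method carries several parallel
# representations and computed properties, and the dataset is extensible by
# attaching new properties to existing rows.
@dataclass
class MethodRecord:
    project_id: str                      # one of the 50,000 Java projects
    class_id: str
    method_id: str
    source: str                          # raw method text
    representations: Dict[str, object] = field(default_factory=dict)  # tokens, AST, graph
    properties: Dict[str, float] = field(default_factory=dict)        # metrics, analyses

record = MethodRecord("proj-001", "com.example.Foo", "bar()",
                      "int bar() { return 1; }")
record.representations["tokens"] = ["int", "bar", "(", ")", "{",
                                    "return", "1", ";", "}"]
record.properties["cyclomatic_complexity"] = 1.0
# A new property for a novel task can be added without touching existing fields:
record.properties["my_custom_metric"] = 0.42
```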
Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has primarily used private data & models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip
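As a reminder of what fitting such a scaling law involves, here is a self-contained sketch with made-up numbers (not results from the paper): downstream error is modeled as a saturating power law in compute, $E(C) = a\,C^{-b} + c$, and fitted with scipy.

```python
import numpy as np
from scipy.optimize import curve_fit

C0 = 1e19  # normalization constant so the fit is well-conditioned

def power_law(C, a, b, c):
    """Saturating power law E(C) = a * (C/C0)^(-b) + c."""
    return a * np.power(C / C0, -b) + c

compute = np.array([1e19, 1e20, 1e21, 1e22, 1e23])   # hypothetical FLOPs
error = np.array([0.52, 0.41, 0.33, 0.27, 0.23])     # hypothetical error rates
params, _ = curve_fit(power_law, compute, error,
                      p0=[0.4, 0.2, 0.1], maxfev=10000)
a, b, c = params
print(f"fit: E(C) = {a:.3g} * (C/C0)^(-{b:.3g}) + {c:.3g}")
print("extrapolated error at 1e24 FLOPs:", power_law(1e24, *params))
```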
In this paper, we investigate the problem of multi-domain translation: given an element $a$ of domain $A$, we would like to generate a corresponding sample $b$ in another domain $B$, and vice versa. Acquiring supervision in multiple domains can be a tedious task, so we propose to learn the translation from one domain to another when supervision is available as a pair $(a,b)\sim A\times B$, and to leverage possible unpaired data when only $a\sim A$ or only $b\sim B$ is available. We introduce a new unified framework called Latent Space Mapping (\model) that exploits the manifold assumption in order to learn, from each domain, a latent space. Unlike existing approaches, we propose to further regularize each latent space using the available domains by learning each dependency between pairs of domains. We evaluate our approach on three tasks: i) image translation on a synthetic dataset, ii) the real-world task of semantic segmentation for medical images, and iii) the real-world task of facial landmark detection.
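A schematic PyTorch sketch of this setup follows (layer sizes and loss weights are illustrative, and \model's actual architecture is richer): one encoder/decoder per domain learns a latent space, paired data supervises a mapping between latent spaces, and unpaired data still contributes through per-domain reconstruction.

```python
import torch
import torch.nn as nn

d_a, d_b, d_z = 32, 48, 16                  # illustrative dimensions

enc_a, dec_a = nn.Linear(d_a, d_z), nn.Linear(d_z, d_a)
enc_b, dec_b = nn.Linear(d_b, d_z), nn.Linear(d_z, d_b)
map_ab = nn.Linear(d_z, d_z)                # latent mapping A -> B

mse = nn.MSELoss()

def loss(a=None, b=None, paired=False):
    total = 0.0
    if a is not None:                       # unpaired a ~ A: reconstruction
        total = total + mse(dec_a(enc_a(a)), a)
    if b is not None:                       # unpaired b ~ B: reconstruction
        total = total + mse(dec_b(enc_b(b)), b)
    if paired:                              # (a, b) ~ A x B: align latents
        total = total + mse(map_ab(enc_a(a)), enc_b(b))
    return total

a, b = torch.randn(8, d_a), torch.randn(8, d_b)
print(loss(a, b, paired=True))              # supervised pair batch
print(loss(a=a))                            # unpaired batch from domain A only
```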
The Codex model has demonstrated extraordinary competence in synthesizing code from natural language problem descriptions. However, in order to reveal unknown failure modes and hidden biases, such large-scale models must be systematically subjected to multiple and diverse evaluation studies. In this work, we evaluate the code synthesis capabilities of the Codex model based on a set of 115 Python problem statements from a popular competitive programming portal: HackerRank. Our evaluation shows that Codex is indeed proficient in Python, solving 96% of the problems in a zero-shot setting, and 100% of the problems in a few-shot setting. However, Codex exhibits clear signs of generating memorized code based on our evaluation. This is alarming, especially since the adoption and use of such models could directly impact how code is written and produced in the foreseeable future. With this in mind, we further discuss and highlight some of the prominent risks associated with large-scale models of source code. Finally, we propose a framework for code-synthesis evaluation using variations of problem statements based on mutations.
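The mutation idea can be sketched as follows; the two operators below are illustrative, not the paper's exact set. A semantics-preserving rewrite (renaming entities) should leave a genuine solver unaffected, while a model keyed to memorized surface form may fail; a semantics-changing rewrite (perturbing constants) should change the expected output.

```python
import re

def rename_entities(statement, mapping):
    """Semantics-preserving mutation: rename variables/identifiers."""
    for old, new in mapping.items():
        statement = re.sub(rf'\b{re.escape(old)}\b', new, statement)
    return statement

def perturb_constants(statement):
    """Semantics-changing mutation: shift every integer constant by one."""
    return re.sub(r'\b(\d+)\b', lambda m: str(int(m.group(1)) + 1), statement)

problem = "Given an array arr of n integers, print the sum of the 3 largest."
print(rename_entities(problem, {"arr": "values", "n": "m"}))
print(perturb_constants(problem))
```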
The spread of misinformation is a prominent problem in today's society, and many researchers in academia and industry are trying to combat it. Due to the vast amount of misinformation that is created every day, it is unrealistic to leave this task to human fact-checkers. Data scientists and researchers have been working on automated misinformation detection for years, and it is still a challenging problem today. The goal of our research is to add a new level to automated misinformation detection: classifying segments of text by persuasive writing technique, in order to produce interpretable reasoning for why an article can be marked as misinformation. To accomplish this, we present a novel annotation scheme containing many common persuasive writing tactics, along with a dataset annotated accordingly by human annotators. For this task, we make use of a RoBERTa model for text classification, due to its high performance in NLP. We develop several language model-based baselines and present the results of our persuasive strategy label predictions, as well as the improvements these intermediate labels make in detecting misinformation and producing interpretable results.
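A minimal sketch of such a RoBERTa segment classifier, using the Hugging Face transformers API, is shown below; the label set is illustrative, and the classification head is untrained, so it would need fine-tuning on the annotated dataset before its predictions mean anything.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative persuasive-technique labels (not the paper's full scheme).
labels = ["loaded_language", "appeal_to_fear", "whataboutism", "none"]
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(labels))

segment = "If we don't act now, everything we love will be destroyed."
inputs = tok(segment, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])  # untrained head: fine-tune first
```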